On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Many tasks in machine learning and signal processing can be solved by
minimizing a convex function of a measure. This includes sparse spikes
deconvolution or training a neural network with a single hidden layer. For
these problems, we study a simple minimization method: the unknown measure is
discretized into a mixture of particles and a continuous-time gradient descent
is performed on their weights and positions. This is an idealization of the
usual way to train neural networks with a large hidden layer. We show that,
when initialized correctly and in the many-particle limit, this gradient flow,
although non-convex, converges to global minimizers. The proof involves
Wasserstein gradient flows, a by-product of optimal transport theory. Numerical
experiments show that this asymptotic behavior is already at play for a
reasonable number of particles, even in high dimension.
Comment: Advances in Neural Information Processing Systems (NIPS), Dec 2018, Montréal, Canada.
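The particle discretization described above can be sketched numerically. The toy example below (illustrative, not the paper's code; the Gaussian feature map, grid, and step size are arbitrary choices) minimizes a convex functional of a measure, F(mu) = 0.5 * ||A(mu) - y||^2, by representing mu as a mixture of weighted particles and running gradient descent jointly on their weights and positions:

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 50)                 # observation grid

def phi(theta):
    """Feature map: each particle position theta_i gives a Gaussian bump."""
    return np.exp(-((grid[None, :] - theta[:, None]) ** 2) / 0.01)

# Target signal generated by a sparse measure with two spikes
y = 1.0 * phi(np.array([0.3]))[0] + 0.5 * phi(np.array([0.7]))[0]

m = 30                                           # over-parameterize: many particles
theta = np.linspace(0.05, 0.95, m)               # initial positions, spread out
w = np.full(m, 1.5 / m)                          # initial weights

loss0 = 0.5 * np.sum((w @ phi(theta) - y) ** 2)
lr = 0.005
for _ in range(20000):
    residual = w @ phi(theta) - y                # A(mu) - y on the grid
    grad_w = phi(theta) @ residual               # dF/dw_i
    # dF/dtheta_i via the derivative of the Gaussian bump w.r.t. its center
    dphi = phi(theta) * (2.0 * (grid[None, :] - theta[:, None]) / 0.01)
    grad_theta = w * (dphi @ residual)
    w -= lr * grad_w
    theta -= lr * grad_theta

loss1 = 0.5 * np.sum((w @ phi(theta) - y) ** 2)
print(f"loss: {loss0:.3f} -> {loss1:.2e}")
```

With enough particles spread over the domain, the joint flow on weights and positions drives the objective close to its global minimum, in line with the many-particle result summarized above.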
Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions. In the presence of hidden low-dimensional structures, the resulting margin is independent of the ambient dimension, which leads to strong generalization bounds. In contrast, training only the output layer implicitly solves a kernel support vector machine, which a priori does not enjoy such adaptivity. Our analysis of training is non-quantitative in terms of running time, but we prove computational guarantees in simplified settings by showing equivalences with online mirror descent. Finally, numerical experiments suggest that our analysis describes well the practical behavior of two-layer neural networks with ReLU activation and confirm the statistical benefits of this implicit bias.
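The setting studied above can be reproduced on a toy problem. The sketch below (illustrative, not the paper's code; data, width, and step size are arbitrary choices) trains a wide two-layer ReLU network with plain gradient descent on the logistic loss for linearly separable data, and reports the normalized margin, i.e. the minimum of y_i f(x_i) divided by the squared parameter norm, the quantity that the max-margin characterization concerns for 2-homogeneous models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Two well-separated Gaussian blobs with labels +1 / -1
X = np.vstack([rng.normal([2, 0], 0.5, size=(n // 2, 2)),
               rng.normal([-2, 0], 0.5, size=(n // 2, 2))])
y = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])

m = 200                                    # wide hidden layer
W = rng.normal(size=(2, m)) / np.sqrt(2)
a = rng.normal(size=m) / np.sqrt(m)

lr = 0.05
for _ in range(5000):
    H = np.maximum(X @ W, 0.0)             # ReLU features
    f = H @ a
    # d/df of log(1 + exp(-y f)) = -y * sigmoid(-y f), written overflow-free
    s = -y * 0.5 * (1.0 - np.tanh(y * f / 2.0))
    grad_a = H.T @ s / n
    grad_W = X.T @ ((X @ W > 0) * np.outer(s, a)) / n
    a -= lr * grad_a
    W -= lr * grad_W

f = np.maximum(X @ W, 0.0) @ a
acc = np.mean(np.sign(f) == y)
# Normalized margin for a 2-homogeneous model: min_i y_i f(x_i) / ||params||^2
margin = np.min(y * f) / (np.sum(W ** 2) + np.sum(a ** 2))
print(f"train accuracy: {acc:.2f}, normalized margin: {margin:.4f}")
```

On separable data the network reaches zero classification error and the normalized margin becomes positive; the paper's result describes which margin the flow selects in the infinite-width limit.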
Scaling Algorithms for Unbalanced Transport Problems
This article introduces a new class of fast algorithms to approximate
variational problems involving unbalanced optimal transport. While classical
optimal transport considers only normalized probability distributions, it is
important for many applications to be able to compute some sort of relaxed
transportation between arbitrary positive measures. A generic class of such
"unbalanced" optimal transport problems has been recently proposed by several
authors. In this paper, we show how to extend the now-classical entropic
regularization scheme to these unbalanced problems. This gives rise to fast,
highly parallelizable algorithms that operate by performing only diagonal
scaling (i.e. pointwise multiplications) of the transportation couplings. They
are generalizations of the celebrated Sinkhorn algorithm. We show how these
methods can be used to solve unbalanced transport, unbalanced gradient flows,
and to compute unbalanced barycenters. We showcase applications to 2-D shape
modification, color transfer, and growth models.
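The diagonal-scaling iteration described above can be sketched in a few lines. The demo below is an illustrative toy (problem data and parameters are arbitrary choices): for entropic regularization eps and KL marginal penalties of weight rho, each half-step is a pointwise "proxdiv" update u = (a / (K v))^(rho/(rho+eps)), a generalization of the Sinkhorn update, which is recovered as rho goes to infinity (exponent going to 1):

```python
import numpy as np

n = 40
x = np.linspace(0.0, 1.0, n)
a = np.exp(-((x - 0.25) ** 2) / 0.01); a /= a.sum()            # source measure, mass 1
b = np.exp(-((x - 0.75) ** 2) / 0.01); b /= b.sum(); b *= 1.5  # target, mass 1.5

C = (x[:, None] - x[None, :]) ** 2      # quadratic ground cost
eps, rho = 1e-2, 1.0
K = np.exp(-C / eps)                    # Gibbs kernel
power = rho / (rho + eps)               # KL "proxdiv" exponent; 1 = balanced Sinkhorn

u = np.ones(n)
v = np.ones(n)
for _ in range(2000):
    u = (a / (K @ v)) ** power          # diagonal scaling of the rows
    v = (b / (K.T @ u)) ** power        # diagonal scaling of the columns

P = u[:, None] * K * v[None, :]         # coupling diag(u) K diag(v)
print("coupling mass:", P.sum())        # no longer forced to match either marginal
```

Because the marginal constraints are only penalized, the total mass of the coupling adapts between the two input masses instead of being pinned to either; only pointwise multiplications and matrix-vector products with K are needed, which is what makes these methods fast and parallelizable.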
On Lazy Training in Differentiable Programming
In a series of recent theoretical works, it was shown that strongly over-parameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to over-parameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high-dimensional tasks.
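The scaling effect described above can be observed numerically. The sketch below (illustrative, not the paper's code; architecture, data, and step sizes are arbitrary choices) multiplies a centered two-layer model's output by a factor alpha and rescales the learning rate by 1/alpha^2 so that the function-space dynamics stay comparable; for large alpha the parameters barely move during training, the signature of the lazy regime:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 20, 5, 100
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

def train(alpha, steps=1000, lr=0.02):
    """Train alpha * (f(w) - f(w0)) by GD; return relative weight movement."""
    rng_init = np.random.default_rng(1)            # same init for every alpha
    W = rng_init.normal(size=(d, m)) / np.sqrt(d)  # hidden layer
    a = rng_init.normal(size=m) / np.sqrt(m)       # output layer
    W0 = W.copy()
    f0 = np.tanh(X @ W) @ a                        # output at init (centering)
    for _ in range(steps):
        H = np.tanh(X @ W)
        r = alpha * (H @ a - f0) - y               # residuals of the scaled model
        grad_a = alpha * H.T @ r / n
        grad_W = alpha * (X.T @ ((1 - H ** 2) * np.outer(r, a))) / n
        W -= (lr / alpha ** 2) * grad_W            # lr rescaled by 1/alpha^2
        a -= (lr / alpha ** 2) * grad_a
    return np.linalg.norm(W - W0) / np.linalg.norm(W0)

move_small = train(alpha=1.0)
move_large = train(alpha=100.0)
print(f"relative weight movement: alpha=1 -> {move_small:.3f}, "
      f"alpha=100 -> {move_large:.5f}")
```

The large-alpha run moves its weights by a small fraction of the alpha=1 run's movement, consistent with the model staying close to its linearization at initialization.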